Deep learning-based joint channel estimation and equalization algorithm for C-V2X communications
CHEN Chengrui, SUN Ning, HE Shibiao, LIAO Yong
Journal of Computer Applications    2021, 41 (9): 2687-2693.   DOI: 10.11772/j.issn.1001-9081.2020111779
In order to effectively improve the Bit Error Rate (BER) performance of the communication system without significantly increasing the computational complexity, a deep learning-based joint channel estimation and equalization algorithm named V-EstEqNet was proposed for the Cellular-Vehicle to Everything (C-V2X) communication system, exploiting the powerful data-processing ability of deep learning. Different from traditional algorithms, in which channel estimation and equalization were carried out in two separate stages in the communication system receiver, V-EstEqNet considered them jointly and used a deep learning network to directly correct and restore the received data, so that channel equalization was completed without explicit channel estimation. Specifically, a large amount of received data was used to train the network offline, so that the network learned the channel characteristics superimposed on the received data; these characteristics were then used to recover the original transmitted data. Simulation results show that the proposed algorithm can track channel characteristics more effectively in different speed scenarios. At the same time, compared with the traditional channel estimation algorithms (Least Squares (LS) and Linear Minimum Mean Square Error (LMMSE)) combined with the traditional channel equalization algorithms (Zero Forcing (ZF) equalization and Minimum Mean Square Error (MMSE) equalization), the proposed algorithm achieves a maximum BER gain of 6 dB in low-speed environments and 9 dB in high-speed environments.
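The joint estimation-and-equalization idea can be illustrated with a toy sketch: a single linear layer trained offline on (received, transmitted) pairs, so that inference needs no explicit channel estimate. The real V-EstEqNet is a deep network; the two-tap channel, frame length, and BPSK modulation below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_symbols = 4               # symbols per frame (illustrative)
h = np.array([0.9, -0.4])   # unknown 2-tap multipath channel (assumed)

def transmit(x):
    """Apply the toy multipath channel plus mild additive noise."""
    y = np.convolve(x, h)[:len(x)]
    return y + 0.01 * rng.standard_normal(len(x))

# Offline training set: random BPSK frames and their received versions.
X_tx = rng.choice([-1.0, 1.0], size=(2000, n_symbols))
X_rx = np.array([transmit(x) for x in X_tx])

# Learn a map from received to transmitted data (least squares stands in
# for training the network), absorbing the channel into the weights.
W, *_ = np.linalg.lstsq(X_rx, X_tx, rcond=None)

# Online: equalize a fresh frame with no separate channel estimation step.
x_new = rng.choice([-1.0, 1.0], size=n_symbols)
x_hat = np.sign(transmit(x_new) @ W)
```

Because the weights were fitted to many received frames, the channel inverse is implicit in `W`, which is the essence of skipping the explicit estimation stage.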
CNN model compression based on activation-entropy based layer-wise iterative pruning strategy
CHEN Chengjun, MAO Yingchi, WANG Yichao
Journal of Computer Applications    2020, 40 (5): 1260-1265.   DOI: 10.11772/j.issn.1001-9081.2019111977

Since the existing pruning strategies for the Convolutional Neural Network (CNN) model differ and achieve only moderate effects, an Activation-Entropy based Layer-wise Iterative Pruning (AE-LIP) strategy was proposed to reduce the parameter amount of the model while keeping the accuracy loss within a controllable range. Firstly, combining the neuronal activation value and information entropy, a weight evaluation criterion based on activation entropy was constructed, and the weight importance score was calculated. Secondly, pruning was performed layer by layer: the weights were sorted by importance score and, combined with the pruning number of each layer, the weights to be pruned were filtered out and set to zero. Finally, the model was fine-tuned, and the above process was repeated until the iteration ended. The experimental results show that the proposed strategy compresses the AlexNet model by 87.5% with an accuracy reduction of 2.12 percentage points, which is 1.54 percentage points better than the magnitude-based weight pruning strategy and 0.91 percentage points better than the correlation-based weight pruning strategy; it compresses the VGG-16 model by 84.1% with an accuracy reduction of 2.62 percentage points, 0.62 and 0.27 percentage points better than the two strategies above, respectively. The proposed strategy thus effectively reduces the size of a CNN model while maintaining its accuracy, which helps the deployment of CNN models on mobile devices with limited storage.
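One layer of the scoring-and-pruning step can be sketched as follows. The exact criterion here (weight magnitude scaled by the output neuron's activation entropy) is an illustrative assumption standing in for the paper's activation-entropy formula, and the layer sizes and pruning ratio are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

def activation_entropy(acts, bins=10):
    """Entropy of one neuron's activation distribution over a batch."""
    hist, _ = np.histogram(acts, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def prune_layer(W, acts, ratio):
    """Zero the fraction `ratio` of weights with the lowest scores."""
    # One entropy value per output neuron (column of W).
    ent = np.array([activation_entropy(acts[:, j]) for j in range(W.shape[1])])
    score = np.abs(W) * ent            # importance = magnitude x entropy
    k = int(ratio * W.size)
    thresh = np.partition(score.ravel(), k)[k]
    return np.where(score < thresh, 0.0, W)

W = rng.standard_normal((8, 4))        # one fully connected layer
acts = rng.standard_normal((100, 4))   # its activations on a calibration batch
W_pruned = prune_layer(W, acts, ratio=0.5)
```

In the full AE-LIP loop this per-layer step would be followed by fine-tuning, then repeated until the target compression is reached.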

Traffic scheduling strategy based on improved Dijkstra algorithm for power distribution and utilization communication network
XIANG Min, CHEN Cheng
Journal of Computer Applications    2018, 38 (6): 1715-1720.   DOI: 10.11772/j.issn.1001-9081.2017112825
Concerning the congestion that easily arises during data aggregation in the power distribution and utilization communication network, a novel hybrid edge-weighted traffic scheduling and routing algorithm was proposed. Firstly, a hierarchical node model was established according to the number of hops. Then, the priorities of power distribution and utilization services and the node congestion levels were divided. Finally, the edge weights were calculated from a comprehensive index of hop count, traffic load rate and link utilization. The nodes requiring traffic scheduling performed route selection according to the improved Dijkstra algorithm, and severely congested nodes were also scheduled in accordance with the priorities of power distribution and utilization services. Compared with the Shortest Path First (SPF) algorithm and the Greedy Backpressure Routing Algorithm (GBRA), when the data generation rate is 80 kb/s, the proposed algorithm reduces the packet loss rate of emergency services by 81.3% and 67.7% respectively, and that of key services by 79% and 63.8% respectively. The simulation results show that the proposed algorithm can effectively alleviate network congestion, improve the effective throughput of the network, and reduce the end-to-end delay and the packet loss rate of high-priority services.
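The edge-weighting idea can be sketched as a weighted sum of the three indices fed into a standard Dijkstra search. The weighting coefficients (0.2/0.4/0.4) and the three-node topology below are illustrative assumptions, not values from the paper.

```python
import heapq

def edge_weight(hops, load_rate, utilization, a=0.2, b=0.4, c=0.4):
    """Comprehensive edge weight from hop count, load rate, utilization."""
    return a * hops + b * load_rate + c * utilization

def dijkstra(graph, src, dst):
    """graph: {node: [(neighbor, weight), ...]}. Returns (cost, path)."""
    dist, prev, seen = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# The congested direct link A-B carries high load and utilization terms,
# so the route detours through the lightly loaded node C.
g = {
    "A": [("B", edge_weight(1, 0.9, 0.95)), ("C", edge_weight(1, 0.2, 0.1))],
    "C": [("B", edge_weight(1, 0.3, 0.2))],
}
cost, path = dijkstra(g, "A", "B")
```

With these weights the two-hop detour costs about 0.72 versus 0.94 for the congested direct link, which is how the load terms steer traffic away from congestion.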
Improved differential fault attack on scalar multiplication algorithm in elliptic curve cryptosystem
XU Shengwei, CHEN Cheng, WANG Rongrong
Journal of Computer Applications    2016, 36 (12): 3328-3332.   DOI: 10.11772/j.issn.1001-9081.2016.12.3328
Concerning the failure problem of fault attacks on elliptic curve scalar multiplication algorithms, an improved differential fault attack algorithm was proposed. The nonzero assumption was eliminated, and an authentication mechanism was introduced against the failure threat of "fault detection". Using the elliptic curve provided by the SM2 algorithm, the binary scalar multiplication algorithm, the binary Non-Adjacent Form (NAF) scalar multiplication algorithm and the Montgomery scalar multiplication algorithm were successfully attacked in software simulation, and the 256-bit private key was recovered within three hours. The attack process on the binary NAF scalar multiplication algorithm was optimized, reducing the attack time to one fifth of the original. The experimental results show that the proposed algorithm improves the effectiveness of the attack.
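For context, the attacked primitive, left-to-right binary (double-and-add) scalar multiplication, can be sketched over a toy curve, with an on-curve check standing in for the fault-detection countermeasure that such attacks must contend with. The small curve y^2 = x^3 + 2x + 3 (mod 97) is an illustrative assumption; the paper works on SM2's 256-bit curve.

```python
p, a, b = 97, 2, 3            # toy curve parameters, not SM2

def add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None            # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def on_curve(P):
    if P is None:
        return True
    x, y = P
    return (y * y - x**3 - a * x - b) % p == 0

def scalar_mult(k, P):
    """Binary method: scan bits of k from most to least significant."""
    R = None
    for bit in bin(k)[2:]:
        R = add(R, R)          # double for every bit
        if bit == "1":
            R = add(R, P)      # add when the bit is 1
    assert on_curve(R), "fault detected"   # simple fault-detection check
    return R

P = (3, 6)   # on the toy curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)
```

A differential fault attack injects a fault into an intermediate point and compares faulty and correct outputs to recover key bits; the on-curve check above is the kind of detection mechanism the improved attack must work around.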
PageRank parallel algorithm based on Web link classification
CHEN Cheng, ZHAN Yinwei, LI Ying
Journal of Computer Applications    2015, 35 (1): 48-52.   DOI: 10.11772/j.issn.1001-9081.2015.01.0048

Concerning the low efficiency of the serial PageRank algorithm in dealing with massive Web data, a PageRank parallel algorithm based on Web link classification was proposed. Firstly, Web pages were classified according to their links, and different weights were set for pages from different websites. Secondly, the page ranks were computed in parallel on the Hadoop platform with MapReduce, which follows a divide-and-conquer pattern. Finally, a three-layer data compression method comprising a data layer, a pretreatment layer and a computation layer was adopted to optimize the parallel algorithm. The experimental results show that, compared with the serial PageRank algorithm, the accuracy of the proposed algorithm is improved by 12% and the efficiency is improved by 33% in the best case.
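One MapReduce-style PageRank iteration with per-link-class weights can be sketched as below. The two link classes ("intra"-site vs "cross"-site), their 0.5/1.0 weights, and the tiny graph are illustrative assumptions standing in for the paper's Web link classification.

```python
from collections import defaultdict

DAMPING = 0.85
WEIGHTS = {"intra": 0.5, "cross": 1.0}   # cross-site links count more

def map_phase(ranks, links):
    """Mapper: emit (target, contribution) pairs, weighted by link class."""
    for page, outlinks in links.items():
        total_w = sum(WEIGHTS[cls] for _, cls in outlinks)
        for target, cls in outlinks:
            yield target, ranks[page] * WEIGHTS[cls] / total_w

def reduce_phase(pairs, n_pages):
    """Reducer: sum contributions per page and apply damping."""
    acc = defaultdict(float)
    for target, contrib in pairs:
        acc[target] += contrib
    return {pg: (1 - DAMPING) / n_pages + DAMPING * s for pg, s in acc.items()}

links = {
    "a": [("b", "cross"), ("c", "intra")],
    "b": [("c", "cross")],
    "c": [("a", "cross")],
}
ranks = {pg: 1 / 3 for pg in links}
ranks = reduce_phase(map_phase(ranks, links), len(links))
```

On Hadoop the mapper and reducer would run on separate shards of the link table; this in-process version only shows the data flow of one iteration.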

DPST: a scheduling algorithm of preventing slow task thrashing in heterogeneous environment
DUAN Hancong, LI Junjie, CHEN Cheng, LI Lin
Journal of Computer Applications    2012, 32 (07): 1910-1912.   DOI: 10.3724/SP.J.1087.2012.01910
With regard to the thrashing problem of load-balancing algorithms in heterogeneous environments, a new scheduling algorithm called Dynamic Predetermination of Slow Task (DPST) was designed to reduce the probability of slow-task scheduling and improve load balancing. By defining a capability measure for heterogeneous tasks on heterogeneous nodes, the capacities of the nodes executing heterogeneous tasks were normalized. With the introduction of predetermination, thrashing resulting from heterogeneous environments was reduced. By using double queues of slow tasks and slow nodes, the scheduling efficiency was improved. The experimental results show that the number of thrashing occurrences in heterogeneous environments fell by more than 40% compared with Hadoop. Because thrashing is effectively reduced, the DPST algorithm performs better in reducing average response time and increasing system throughput in heterogeneous environments.
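The predetermination-plus-double-queue idea can be sketched as follows: node capability is normalized against the fastest node, and a task predetermined to be slow is only ever dispatched to a fast node, which is the thrashing case DPST avoids. The 0.5 slow-node threshold, round-robin dispatch, and node speeds are illustrative assumptions.

```python
def dispatch(tasks, nodes, threshold=0.5):
    """tasks: [(name, predicted_slow)]; nodes: {name: speed}."""
    max_speed = max(nodes.values())
    # Normalized capability in [0, 1]; below the threshold = slow node.
    fast = [n for n, s in nodes.items() if s / max_speed >= threshold]
    slow = [n for n in nodes if n not in fast]
    plan, fi, si = {}, 0, 0
    for name, predicted_slow in tasks:
        if predicted_slow:
            # Predetermined slow task: fast nodes only, never a slow node.
            plan[name] = fast[fi % len(fast)]
            fi += 1
        else:
            # Normal task: any node, slow nodes first to balance load.
            pool = slow + fast
            plan[name] = pool[si % len(pool)]
            si += 1
    return plan

plan = dispatch([("t1", True), ("t2", False), ("t3", True)],
                {"n1": 10.0, "n2": 3.0})
```

Keeping slow tasks and slow nodes in separate pools is what prevents a slow task from bouncing between overloaded nodes, the thrashing behavior the paper measures.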